Binary Neural Networks (BNNs) have shown impressive success on realistic image classification tasks, with accuracy approaching that of state-of-the-art full-precision models tailored to edge devices. BNNs are well suited to edge devices: each input and weight is stored in a single bit, keeping storage requirements low, and their computations are dominated by XNOR and pop-count operations, which map very efficiently onto simple hardware structures. Nonetheless, supporting BNNs efficiently on mobile CPUs is far from trivial, since their benefits are hindered by frequent memory accesses to load weights and inputs. Because each weight or input occupies one bit, several of them are packed together as a sequence of bits to increase storage and computation efficiency. In this work, we observe that the number of unique sequences representing a set of weights is typically low, and that during the evaluation of a BNN layer, a small group of unique sequences is used far more frequently than the rest. We propose exploiting this observation by Huffman-encoding the bit sequences and decoding them through an indirection table during BNN evaluation. We further propose a clustering scheme that identifies the most common bit sequences and replaces the less common ones with similar common sequences. Since common sequences are encoded with fewer bits, this reduces both storage requirements and memory accesses. We extend a mobile CPU with a small hardware structure that efficiently caches and decodes the compressed bit sequences. We evaluate our scheme using the ReActNet model on the ImageNet dataset. Our experimental results show that our technique reduces memory requirements by 1.32x and improves performance by 1.35x.
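To make the encoding step concrete, the following is a minimal sketch of the core idea, not the paper's implementation: it packs a skewed stream of binary weights into 8-bit sequences, Huffman-encodes the unique sequences so the common ones cost fewer bits, and reports the resulting compression ratio. The sequence length, synthetic weight distribution, and helper names are illustrative assumptions.

```python
# Sketch: Huffman-encode packed bit sequences of binary weights.
import heapq
from collections import Counter

import numpy as np

def huffman_code(freqs):
    """Build a prefix code from {symbol: count} via the classic heap algorithm."""
    heap = [(count, i, [sym]) for i, (sym, count) in enumerate(freqs.items())]
    heapq.heapify(heap)
    code = {sym: "" for sym in freqs}
    next_id = len(heap)
    while len(heap) > 1:
        c1, _, syms1 = heapq.heappop(heap)
        c2, _, syms2 = heapq.heappop(heap)
        for s in syms1:               # left subtree gets a leading 0
            code[s] = "0" + code[s]
        for s in syms2:               # right subtree gets a leading 1
            code[s] = "1" + code[s]
        heapq.heappush(heap, (c1 + c2, next_id, syms1 + syms2))
        next_id += 1
    return code

# Pack synthetic {0,1} weight bits into 8-bit sequences (8 weights per symbol).
rng = np.random.default_rng(0)
bits = (rng.random(8 * 4096) < 0.7).astype(np.uint8)  # skewed, like real layers
symbols = np.packbits(bits).tolist()

freqs = Counter(symbols)
code = huffman_code(freqs)

plain_bits = 8 * len(symbols)
coded_bits = sum(len(code[s]) for s in symbols)
print(f"unique sequences: {len(freqs)} / 256")
print(f"compression: {plain_bits / coded_bits:.2f}x")
```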
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
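As a usage illustration (ours, not taken from the paper), the released checkpoints can be queried through the Hugging Face transformers library. The small bloom-560m checkpoint and the translation prompt below are assumptions chosen to keep the sketch lightweight; larger checkpoints follow the same API.

```python
# Minimal sketch: few-shot prompting an open BLOOM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"  # assumed small public checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# In-context demonstration followed by a new query.
prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "cheese =>"
)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```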
We present an enrichment of the HatEval corpus of hate speech tweets (Basile et al., 2019) aimed at facilitating automated counter-narrative generation. As in previous work (Chung et al., 2019), manually written counter-narratives are associated with the tweets. However, this information alone seems insufficient to obtain satisfactory language models for counter-narrative generation. For this reason, we also annotate the tweets with argumentative information following Wagemans (2016), which we believe can help build convincing and effective counter-narratives against hate speech targeting specific groups. We discuss the adequacy and difficulties of this annotation process and present several baselines for automatically detecting the annotated elements. Preliminary results show that automatic annotators come close to human annotators in detecting some aspects of argumentation, while others reach only low or moderate levels of inter-annotator agreement.
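As one hedged illustration of what such a detection baseline could look like, assuming a simple bag-of-words formulation that the paper does not necessarily use, the sketch below classifies whether a tweet contains a given argumentative element; the placeholder data and label scheme are ours.

```python
# Sketch: TF-IDF + logistic regression baseline for detecting an
# annotated argumentative element in tweets (placeholder data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1 = argumentative element present, 0 = absent (illustrative labels).
tweets = [
    "they are ruining this country and must go",
    "lovely weather in the city today",
]
labels = [1, 0]

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
baseline.fit(tweets, labels)
print(baseline.predict(["a new tweet to annotate automatically"]))
```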
Medical image segmentation can be implemented with deep learning methods using fast, efficient segmentation networks. Single-board computers (SBCs) are difficult to use for training deep networks due to their memory and processing limitations, but dedicated hardware such as the Google Edge TPU makes them suitable for real-time prediction with complex pre-trained networks. In this work, we study the performance of two SBCs, with and without hardware acceleration, for fundus image segmentation, although the conclusions of this study can be applied to the segmentation of other types of medical images with deep neural networks. To test the benefits of hardware acceleration, we use the networks and datasets from previously published work and generalize them by testing on a dataset of ultrasound thyroid images. We measure prediction times on the SBCs and compare them with a cloud-based TPU system. The results show the feasibility of machine-learning-accelerated SBCs using the Edge TPU, which accelerates optic disc and cup segmentation, achieving times below 25 milliseconds per image.
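The timing methodology can be sketched as follows, assuming a model already compiled for the Edge TPU; the model filename and the simple timing loop are illustrative choices, not the paper's exact benchmarking code.

```python
# Sketch: time per-image predictions of an Edge-TPU-compiled
# segmentation model using tflite_runtime.
import time

import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="segmentation_edgetpu.tflite",  # assumed compiled model name
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

image = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a fundus image
times = []
for _ in range(100):
    start = time.perf_counter()
    interpreter.set_tensor(inp["index"], image)
    interpreter.invoke()
    _ = interpreter.get_tensor(out["index"])
    times.append(time.perf_counter() - start)

print(f"mean prediction time: {1000 * sum(times) / len(times):.1f} ms")
```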
This report illustrates the state of the art of the most successful AAL (Ambient Assisted Living) applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring and activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the status of scientific advances, available products, and research projects. Open challenges are also highlighted.
Due to imaging artifacts and the low signal-to-noise ratio of ultrasound images, automatic bone surface segmentation networks often produce fragmented predictions, hindering the success of ultrasound-guided computer-assisted surgical procedures. Lacking supervision that enforces connectivity, existing pixel-wise predictions often fail to capture the accurate topology of bone tissue. In this work, we propose an orientation-guided graph convolutional network to improve connectivity while segmenting bone surfaces. We also propose additional supervision on the orientation of the bone surface to further impose connectivity. We validate our method on 1,042 in vivo US scans of the femur, knee, spine, and distal radius. Our method improves the connectivity metric of state-of-the-art methods by 5.01%.
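As an illustration of the kind of building block such a network rests on, and not the paper's actual architecture, the sketch below implements a single graph convolution with symmetric normalization, A_hat = D^{-1/2}(A + I)D^{-1/2}, over a toy chain of candidate surface pixels.

```python
# Sketch: one graph convolution layer with normalized neighbor aggregation.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) binary adjacency.
        a_hat = adj + torch.eye(adj.size(0))    # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        norm = d_inv_sqrt @ a_hat @ d_inv_sqrt  # symmetric normalization
        return torch.relu(self.linear(norm @ x))

# Toy graph: 5 nodes (e.g., candidate bone-surface pixels) in a chain.
adj = torch.zeros(5, 5)
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
x = torch.randn(5, 16)
print(GraphConv(16, 8)(x, adj).shape)  # torch.Size([5, 8])
```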
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
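To make the evaluation setup concrete, here is a minimal sketch of few-shot prompting with exact-match scoring, using a toy task layout we assume for illustration rather than the benchmark's actual API.

```python
# Sketch: few-shot evaluation of a model on a task of input/target pairs.
task = {
    "examples": [
        {"input": "2 + 2", "target": "4"},
        {"input": "3 + 5", "target": "8"},
        {"input": "7 + 1", "target": "8"},
        {"input": "6 + 6", "target": "12"},
    ]
}

def evaluate(task, generate, num_shots=2):
    shots = task["examples"][:num_shots]
    queries = task["examples"][num_shots:]
    prefix = "".join(f"Q: {ex['input']}\nA: {ex['target']}\n" for ex in shots)
    correct = sum(
        generate(prefix + f"Q: {ex['input']}\nA:").strip() == ex["target"]
        for ex in queries
    )
    return correct / len(queries)

# `generate` would wrap any language model; a toy stand-in that actually
# does the arithmetic exercises the scoring path end to end.
def toy_model(prompt):
    question = prompt.rsplit("Q: ", 1)[1].split("\nA:")[0]
    return str(eval(question))  # toy only; eval is unsafe on real inputs

print(evaluate(task, toy_model))  # 1.0
```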
Removing adverse weather conditions like rain, fog, and snow from images is an important problem in many applications. Most methods proposed in the literature are designed to deal with removing only one type of degradation. Recently, a CNN-based method using neural architecture search (All-in-One) was proposed to remove all weather conditions at once. However, it has a large number of parameters, as it uses multiple encoders to cater to each weather-removal task, and it still has scope for performance improvement. In this work, we focus on developing an efficient solution to the all-adverse-weather removal problem. To this end, we propose TransWeather, a transformer-based end-to-end model with just a single encoder and a decoder that can restore an image degraded by any weather condition. Specifically, we exploit a novel transformer encoder using intra-patch transformer blocks to enhance attention within patches and effectively remove smaller weather degradations. We also introduce a transformer decoder with learnable weather-type embeddings that adapt to the weather degradation at hand. TransWeather achieves significant improvements across multiple test datasets over both the All-in-One network and methods fine-tuned for specific tasks. In particular, TransWeather pushes the current state of the art by +6.34 PSNR on the Test1 (rain+fog) dataset, +4.93 PSNR on the SnowTest100K-L dataset, and +3.11 PSNR on the RainDrop test dataset. TransWeather is also validated on real-world test images and found to be more effective than previous methods. Implementation code and pre-trained weights can be accessed at https://github.com/jeya-maria-jose/transweather.
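The decoder idea can be sketched roughly as follows; this is our reading of the mechanism rather than the released model, and the feature dimensions and number of query embeddings are assumptions.

```python
# Sketch: learnable weather-type queries cross-attend to encoder features.
import torch
import torch.nn as nn

class WeatherQueryDecoder(nn.Module):
    def __init__(self, dim=256, num_queries=48, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))  # learned weather-type embeddings
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, enc_feats):
        # enc_feats: (B, N, dim) flattened encoder features.
        b = enc_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        attended, _ = self.cross_attn(q, enc_feats, enc_feats)  # queries read the image features
        return self.ffn(attended)  # task features fed to the restoration head

feats = torch.randn(2, 64 * 64, 256)
print(WeatherQueryDecoder()(feats).shape)  # torch.Size([2, 48, 256])
```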
Current shadow detection methods perform poorly when detecting shadow regions that are small, unclear, or have blurry edges. In this work, we attempt to address this problem on two fronts. First, we propose a Fine Context-aware Shadow Detection Network (FCSD-Net), where we constrain the receptive field size and focus on low-level features to better learn fine-context features. Second, we propose a new learning strategy called Restore to Detect (R2D), where we show that when a deep neural network is trained for restoration (shadow removal), it also learns meaningful features for delineating shadow masks. To exploit this complementarity between the shadow detection and removal tasks, we train an auxiliary network for shadow removal and propose a Complementary Feature Learning block (CFL) to learn and fuse meaningful features from the shadow-removal network into the shadow-detection network. We train the proposed network FCSD-Net with the R2D learning strategy on multiple datasets. Experimental results on three public shadow detection datasets (ISTD, SBU, and UCF) show that our method detects fine context better, improving shadow detection performance compared to other recent methods.
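A rough sketch of the complementary-fusion idea, under our own assumptions rather than the paper's exact CFL design, is shown below: detection features are modulated by an attention map derived from the shadow-removal branch.

```python
# Sketch: fuse shadow-removal features into the detection branch.
import torch
import torch.nn as nn

class ComplementaryFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, detect_feat, removal_feat):
        # Where the removal branch responds, emphasize the detection features.
        attn = self.gate(removal_feat)
        fused = torch.cat([detect_feat * attn, removal_feat], dim=1)
        return self.merge(fused)

detect = torch.randn(1, 64, 96, 96)   # from the detection network
removal = torch.randn(1, 64, 96, 96)  # from the auxiliary removal network
print(ComplementaryFusion(64)(detect, removal).shape)  # torch.Size([1, 64, 96, 96])
```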
In image fusion, images obtained from different sensors are fused to generate a single image with enhanced information. In recent years, state-of-the-art methods have adopted Convolutional Neural Networks (CNNs) to encode meaningful features for image fusion. Specifically, CNN-based methods perform image fusion by fusing local features. However, they do not consider the long-range dependencies present in the image. Transformer-based models are designed to overcome this by modeling long-range dependencies with the help of a self-attention mechanism. This motivates us to propose a novel Image Fusion Transformer (IFT), in which we develop a transformer-based multi-scale fusion strategy that attends to both local and long-range information (or global context). The proposed method follows a two-stage training approach. In the first stage, we train an auto-encoder to extract deep features at multiple scales. In the second stage, multi-scale features are fused using a Spatio-Transformer (ST) fusion strategy. The ST fusion blocks consist of a CNN branch and a transformer branch, which capture local and long-range features, respectively. Extensive experiments on multiple benchmark datasets show that the proposed method performs better than many competitive fusion algorithms. Furthermore, we show the effectiveness of the proposed ST fusion strategy with an ablation analysis. The source code is available at: https://github.com/Vibashan/Image-Fusion-Transformer.
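A minimal sketch of an ST-style fusion block follows, reflecting the description above rather than the released code; channel sizes and depths are assumptions.

```python
# Sketch: parallel CNN (local) and transformer (long-range) branches, merged.
import torch
import torch.nn as nn

class STFusionBlock(nn.Module):
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.local = nn.Sequential(                       # CNN branch: local features
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        enc_layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True
        )
        self.globl = nn.TransformerEncoder(enc_layer, num_layers=1)  # long-range branch
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = x.flatten(2).transpose(1, 2)             # (B, H*W, C)
        globl = self.globl(tokens).transpose(1, 2).reshape(b, c, h, w)
        return self.merge(torch.cat([local, globl], dim=1))

# Fuse deep features from two source images (e.g., visible and infrared);
# summing the inputs before the block is an illustrative simplification.
f1, f2 = torch.randn(1, 32, 16, 16), torch.randn(1, 32, 16, 16)
print(STFusionBlock(32)(f1 + f2).shape)  # torch.Size([1, 32, 16, 16])
```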